This paper introduces a novel methodology for training an event-driven classifier within a Spiking Neural Network (SNN) system capable of yielding good classification results when using both synthetic input data and real data captured from Dynamic Vision Sensor (DVS) chips. The proposed supervised method uses the spiking activity provided by an arbitrary topology of prior SNN layers to build histograms and train the classifier in the frame domain using the stochastic gradient descent algorithm. In addition, this approach can cope with leaky integrate-and-fire neuron models within the SNN, a desirable feature for real-world SNN applications, where neural activation must fade away after some time in the absence of inputs. Consequently, this way of building histograms captures the dynamics of the spikes arriving immediately before the classifier. We tested our method on the MNIST data set using different synthetic encodings, and on real DVS sensory data sets such as N-MNIST, MNIST-DVS, and Poker-DVS, using the same network topology and feature maps. We demonstrate the effectiveness of our approach by achieving the highest classification accuracy reported to date with a spiking convolutional network on the N-MNIST (97.77%) and Poker-DVS (100%) real DVS data sets. Moreover, using the proposed method we were able to retrain the output layer of a previously reported spiking neural network and increase its performance by 2%, suggesting that the proposed classifier can be used as the output layer in works where features are extracted using unsupervised spike-based learning methods. We also analyze SNN performance figures such as total event activity and network latencies, which are relevant for eventual hardware implementations. In summary, the paper aggregates unsupervised-trained SNNs with a supervised-trained SNN classifier, combining and applying them to heterogeneous sets of benchmarks, both synthetic and from real DVS chips.
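To make the core idea concrete, here is a minimal sketch (not the paper's actual code; all names and parameters are illustrative assumptions): a spike train from the preceding SNN layers is reduced to a per-neuron histogram, with an optional exponential decay term mimicking leaky integrate-and-fire dynamics so that older spikes contribute less, and a linear softmax classifier is then trained on these histograms with stochastic gradient descent.

```python
import numpy as np

def spike_histogram(spike_times, spike_neurons, n_neurons, tau=None, t_end=None):
    """Build a per-neuron spike-count histogram.

    If tau is given, each spike at time t is weighted by exp(-(t_end - t)/tau),
    a simple leaky-integration model in which activation fades over time.
    """
    h = np.zeros(n_neurons)
    for t, n in zip(spike_times, spike_neurons):
        w = 1.0 if tau is None else np.exp(-(t_end - t) / tau)
        h[n] += w
    return h

def sgd_softmax_step(W, b, x, y, lr=0.1):
    """One stochastic gradient descent step on the cross-entropy loss
    of a linear softmax classifier, for a single histogram x with label y."""
    z = W @ x + b
    z = z - z.max()                    # subtract max for numerical stability
    p = np.exp(z) / np.exp(z).sum()    # softmax probabilities
    p[y] -= 1.0                        # gradient of cross-entropy w.r.t. z
    W -= lr * np.outer(p, x)           # gradient step on weights
    b -= lr * p                        # gradient step on biases
    return W, b
```

In this sketch the classifier operates entirely in the frame (histogram) domain, as the abstract describes, while the decay factor `tau` is the hypothetical knob that captures the leaky neuron dynamics just before classification.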